Goto

Collaborating Authors






Gradient based sample selection for online continual learning

Neural Information Processing Systems

A continual learning agent learns online with a non-stationary and never-ending stream of data. The key to such a learning process is to overcome the catastrophic forgetting of previously seen data, which is a well-known problem of neural networks.





Supplemental Material for Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency

Neural Information Processing Systems

To compare the quality of target samples being selected for training, we measure reliability precision (how many of the selected target samples were actually predicted correctly?). We report expected calibration error (ECE [7]); lower is better. We separately visualize features before and after in-domain pretraining with MAE [7] and DINO [8]. We note that these features are completely self-supervised, as the model has not seen task labels yet. Regardless, we observe a small degree of task discriminativeness (examples of the same class are clustered together) and domain invariance (examples of the same class but different domains are close) before additional pretraining. We now measure the degree of label overlap between ImageNet-22K and these 3 benchmarks.
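The ECE metric mentioned above can be sketched as follows. This is a generic binned implementation of expected calibration error (bin confidences, compare per-bin accuracy to per-bin mean confidence, weight by bin size), not code from the paper's supplement; the function name and the choice of 10 bins are illustrative.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average over confidence bins of
    |bin accuracy - bin mean confidence|. Lower is better."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins (lo, hi]; the first bin also includes 0.
        in_bin = [(c, ok) for c, ok in zip(confidences, correct)
                  if (c > lo or b == 0) and c <= hi]
        if in_bin:
            acc = sum(ok for _, ok in in_bin) / len(in_bin)
            avg_conf = sum(c for c, _ in in_bin) / len(in_bin)
            ece += (len(in_bin) / n) * abs(acc - avg_conf)
    return ece


# A perfectly calibrated batch (confidence 0.75, 3 of 4 correct) gives ECE 0;
# a fully overconfident batch (confidence 0.9, all wrong) gives ECE 0.9.
print(expected_calibration_error([0.75, 0.75, 0.75, 0.75], [1, 1, 1, 0]))
print(expected_calibration_error([0.9, 0.9], [0, 0]))
```

Reliability precision, by contrast, is simply the fraction of selected samples whose pseudo-label matches the true label, so it needs no binning.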